THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n
Authors
Abstract
sparsity conditions of the form s ≲ n/log p, where s is the dimension of the sparsest model. These are, respectively, the conditions of this paper using the Dantzig selector and those of Bunea, Tsybakov and Wegkamp [2] and Meinshausen and Yu [9] using the Lasso. Strictly speaking, Bunea, Tsybakov and Wegkamp consider only prediction, not l2 loss, but in a paper in preparation with Ritov and Tsybakov we show that the spirit of their conditions applies to l2 loss as well. Since these authors emphasize different points and use different normalizations, I thought it would be useful to present their results together. Write the model as ...
Similar Papers
DISCUSSION: THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n
given just a single parameter t. Two active-set methods were described in [11], where X is n × p, with some concern about efficiency if p were large. Later, when basis pursuit de-noising (BPDN) was introduced [2], the intention was to deal with very large p and to allow X to be a sparse matrix or a fast operator. A primal–dual interior method was used to solve the associated quadratic program, ...
DISCUSSION: THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n
1. Introduction. This is a fascinating paper on an important topic: the choice of predictor variables in large-scale linear models. A previous paper in these pages attacked the same problem using the "LARS" algorithm (Efron, Hastie, Johnstone and Tibshirani [3]); actually three algorithms, including the Lasso as the middle case. There are tantalizing similarities between the Dantzig Selector (DS) ...
The Dantzig selector: statistical estimation when p is much larger than n
In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ R^p is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ p, and the z_i's are i.i.d. N(0, σ²). Is it possible to estimate x reliably based on the noisy data y? ...
THE DANTZIG SELECTOR: STATISTICAL ESTIMATION WHEN p IS MUCH LARGER THAN n
In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Xβ + z, where β ∈ R^p is a parameter vector of interest, X is a data matrix with possibly far fewer rows than columns, n ≪ p, and the z_i's are i.i.d. N(0, σ²). Is it possible to estimate β reliably based on the noisy data y? ...
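The estimator behind these abstracts minimizes ||β||_1 subject to a sup-norm bound on the residual correlations, ||X^T(y − Xβ)||_∞ ≤ λ, which is a linear program. The sketch below poses that LP with `scipy.optimize.linprog` on a toy instance; the problem sizes, noise level, and the threshold λ = σ√(2 log p) are illustrative choices for unit-norm columns, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """min ||b||_1  subject to  ||X^T (y - X b)||_inf <= lam, posed as an LP."""
    n, p = X.shape
    G = X.T @ X          # Gram matrix, p x p
    c0 = X.T @ y         # correlations with the response
    # LP variables z = [b, t] with |b_j| <= t_j; minimize sum(t) = ||b||_1.
    c = np.concatenate([np.zeros(p), np.ones(p)])
    I = np.eye(p)
    Z = np.zeros((p, p))
    A_ub = np.vstack([
        np.hstack([ I, -I]),   #  b - t <= 0
        np.hstack([-I, -I]),   # -b - t <= 0
        np.hstack([ G,  Z]),   #  X^T X b <= X^T y + lam
        np.hstack([-G,  Z]),   # -X^T X b <= lam - X^T y
    ])
    b_ub = np.concatenate([np.zeros(2 * p), c0 + lam, lam - c0])
    bounds = [(None, None)] * p + [(0, None)] * p
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Toy instance (hypothetical sizes): sparse beta, columns scaled to unit norm.
rng = np.random.default_rng(0)
n, p, sigma = 50, 100, 0.1
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta = np.zeros(p)
beta[[0, 1, 2]] = [2.0, -1.5, 1.0]
y = X @ beta + sigma * rng.standard_normal(n)
beta_hat = dantzig_selector(X, y, sigma * np.sqrt(2 * np.log(p)))
```

The dense LP has 2p variables and 4p inequality rows, so this sketch is only for small p; the paper's point is that such recovery is statistically possible at all when n ≪ p, not that this formulation is the efficient way to solve it.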
Multi-Stage Dantzig Selector
We consider the following sparse signal recovery (or feature selection) problem: given a design matrix X ∈ R^{n×m} (m ≫ n) and a noisy observation vector y ∈ R^n satisfying y = Xβ* + ε, where ε is a noise vector following a Gaussian distribution N(0, σ²I), how do we recover the signal (or parameter vector) β* when the signal is sparse? The Dantzig selector has been proposed for sparse signal recovery ...
Journal:
Volume / Issue:
Pages: -
Publication date: 2007